7 research outputs found

    Linguistic Analysis of Non-ITG Word Reordering between Language Pairs with Different Word Order Typologies

    Get PDF

    ECAPA-TDNN Embeddings for Speaker Diarization

    Full text link
    Learning robust speaker embeddings is a crucial step in speaker diarization. Deep neural networks can accurately capture speaker-discriminative characteristics, and popular deep embeddings such as x-vectors are nowadays a fundamental component of modern diarization systems. Recently, some improvements over the standard TDNN architecture used for x-vectors have been proposed. The ECAPA-TDNN model, for instance, has shown impressive performance in the speaker verification domain, thanks to a carefully designed neural model. In this work, we extend, for the first time, the use of the ECAPA-TDNN model to speaker diarization. Moreover, we improve its robustness with a powerful augmentation scheme that concatenates several contaminated versions of the same signal within the same training batch. The ECAPA-TDNN model turned out to provide robust speaker embeddings under both close-talking and distant-talking conditions. Our results on the popular AMI meeting corpus show that our system significantly outperforms recently proposed approaches.
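
    The batch-level augmentation described in the abstract can be sketched in a few lines of Python: for every clean waveform in a batch, several contaminated copies are concatenated into the same training batch so that clean and corrupted views share the same speaker label. The additive-noise contamination and the helper names below are illustrative assumptions, not the authors' implementation.

```python
import torch

def contaminate(waves: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Add white noise at a given SNR (illustrative stand-in for the
    paper's noise/reverberation contaminations)."""
    noise = torch.randn_like(waves)
    sig_power = waves.pow(2).mean()
    noise_power = noise.pow(2).mean()
    scale = torch.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return waves + scale * noise

def augment_batch(waves: torch.Tensor, labels: torch.Tensor,
                  snrs=(20.0, 10.0, 5.0)):
    """Concatenate several contaminated versions of each signal within
    the same training batch (hypothetical helper, not the paper's code)."""
    views = [waves] + [contaminate(waves, snr) for snr in snrs]
    big_waves = torch.cat(views, dim=0)       # (B * (1 + len(snrs)), T)
    big_labels = labels.repeat(len(views))    # speaker labels shared across views
    return big_waves, big_labels

# Toy usage: a batch of 4 one-second signals at 16 kHz
waves = torch.randn(4, 16000)
labels = torch.tensor([0, 1, 2, 3])
aug_waves, aug_labels = augment_batch(waves, labels)
print(aug_waves.shape, aug_labels.shape)  # torch.Size([16, 16000]) torch.Size([16])
```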

    Adaptive Knowledge Distillation Based on Entropy

    No full text
    The knowledge distillation (KD) approach is widely used in the deep learning field, mainly for model size reduction. KD utilizes the soft labels of a teacher model, which contain the dark knowledge that one-hot ground-truth labels do not have. This knowledge can improve the performance of an already saturated student model. In the case of multiple teacher models, the same weighted average (interpolated training) of the teachers' labels is generally applied to KD training. However, if the knowledge characteristics differ among teachers, interpolated training risks crushing the individual knowledge characteristics and can also introduce a noise component. In this paper, we propose entropy-based KD training, which uses the labels of teacher models with lower entropy at a larger rate among the various teacher models. The proposed method shows better performance than the conventional KD training scheme in automatic speech recognition.
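
    A minimal sketch of the entropy-based weighting idea follows: each teacher's soft labels are weighted by a softmax over their negative entropies, so that lower-entropy (more confident) teachers contribute more to the distillation target. The softmax weighting and the helper names are illustrative assumptions; the paper's exact weighting function may differ.

```python
import torch
import torch.nn.functional as F

def entropy(probs: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Shannon entropy of each row of a (..., classes) probability tensor."""
    return -(probs * (probs + eps).log()).sum(dim=-1)

def entropy_weighted_targets(teacher_probs, temperature: float = 1.0) -> torch.Tensor:
    """Combine multiple teachers' soft labels, giving lower-entropy teachers
    a larger weight (softmax over negative entropies; an illustrative choice,
    not necessarily the paper's exact formula)."""
    stacked = torch.stack(teacher_probs, dim=0)      # (num_teachers, batch, classes)
    ent = entropy(stacked)                           # (num_teachers, batch)
    weights = F.softmax(-ent / temperature, dim=0)   # lower entropy -> larger weight
    return (weights.unsqueeze(-1) * stacked).sum(dim=0)

def kd_loss(student_logits: torch.Tensor, mixed_targets: torch.Tensor) -> torch.Tensor:
    """KL divergence between the student's predictions and the mixed soft labels."""
    log_p = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_p, mixed_targets, reduction="batchmean")

# Toy usage with two teachers on a 5-class task
t1 = F.softmax(torch.randn(8, 5), dim=-1)
t2 = F.softmax(torch.randn(8, 5), dim=-1)
targets = entropy_weighted_targets([t1, t2])
loss = kd_loss(torch.randn(8, 5), targets)
print(float(loss))
```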

    Improving fluency by reordering target constituents using MST parser in English-to-Japanese phrase-based SMT

    No full text
    We propose a reordering method to improve the fluency of the output of a phrase-based SMT (PBSMT) system. We parse the translation results that follow the source language order into non-projective dependency trees, then reorder the dependency trees to obtain fluent target sentences. Our method ensures that the translation results are grammatically correct and achieves major improvements over PBSMT on dependency-based metrics.
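
    The reordering step can be illustrated with a toy head-final linearization of a dependency tree, which moves each head after its dependents, as is typical of Japanese word order. The tree representation and the head-final heuristic below are assumptions for illustration only; the paper derives the target order from an MST parser and its own reordering procedure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of a (possibly non-projective) dependency tree."""
    word: str
    children: list = field(default_factory=list)

def head_final_order(node: Node) -> list:
    """Linearize a dependency tree with every head placed after its dependents
    (a rough head-final heuristic, not the paper's learned reordering)."""
    words = []
    for child in node.children:
        words.extend(head_final_order(child))
    words.append(node.word)
    return words

# Toy tree for a source-ordered output "I eat apples":
# the verb head is moved after its dependents, giving SOV order.
tree = Node("eat", [Node("I"), Node("apples")])
print(" ".join(head_final_order(tree)))   # -> "I apples eat"
```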

    SpeechBrain: A General-Purpose Speech Toolkit

    No full text
    Preprint
    SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to facilitate the research and development of neural speech processing technologies by being simple, flexible, user-friendly, and well-documented. This paper describes the core architecture designed to support several tasks of common interest, allowing users to naturally conceive, compare and share novel speech processing pipelines. SpeechBrain achieves competitive or state-of-the-art performance in a wide range of speech benchmarks. It also provides training recipes, pretrained models, and inference scripts for popular speech datasets, as well as tutorials which allow anyone with basic Python proficiency to familiarize themselves with speech technologies.
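
    As an example of the kind of inference script the toolkit ships, the sketch below loads a pretrained ECAPA-TDNN speaker encoder through SpeechBrain's pretrained-model interface and extracts an embedding from a waveform. The module path and model identifier follow the publicly documented API, but they may differ between SpeechBrain versions, so treat this as a sketch rather than a guaranteed recipe.

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Download a pretrained ECAPA-TDNN speaker encoder from the SpeechBrain
# model hub (identifier as published by the project; it may change
# between releases).
encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# Load a 16 kHz mono recording and extract a fixed-size speaker embedding.
signal, sample_rate = torchaudio.load("example.wav")
embedding = encoder.encode_batch(signal)
print(embedding.shape)   # (batch, 1, embedding_dim)
```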